Chapter 1: About the project

The course description is available at mooc.helsinki.fi

Link to my GitHub repository


Chapter 2: Regression and model validation

First look at data wrangling and analysis

Data wrangling

Reading the full data and naming it “learning2014”.

learning2014 <- read.table("http://www.helsinki.fi/~kvehkala/JYTmooc/JYTOPKYS3-data.txt", sep="\t", header = TRUE)

The data consists of 183 rows (observations) and 60 columns (variables).

dim(learning2014)
## [1] 183  60

The structure of the data:

str(learning2014)
## 'data.frame':    183 obs. of  60 variables:
##  $ Aa      : int  3 2 4 4 3 4 4 3 2 3 ...
##  $ Ab      : int  1 2 1 2 2 2 1 1 1 2 ...
##  $ Ac      : int  2 2 1 3 2 1 2 2 2 1 ...
##  $ Ad      : int  1 2 1 2 1 1 2 1 1 1 ...
##  $ Ae      : int  1 1 1 1 2 1 1 1 1 1 ...
##  $ Af      : int  1 1 1 1 1 1 1 1 1 2 ...
##  $ ST01    : int  4 4 3 3 4 4 5 4 4 4 ...
##  $ SU02    : int  2 2 1 3 2 3 2 2 1 2 ...
##  $ D03     : int  4 4 4 4 5 5 4 4 5 4 ...
##  $ ST04    : int  4 4 4 4 3 4 2 5 5 4 ...
##  $ SU05    : int  2 4 2 3 4 3 2 4 2 4 ...
##  $ D06     : int  4 2 3 4 4 5 3 3 4 4 ...
##  $ D07     : int  4 3 4 4 4 5 4 4 5 4 ...
##  $ SU08    : int  3 4 1 2 3 4 4 2 4 2 ...
##  $ ST09    : int  3 4 3 3 4 4 2 4 4 4 ...
##  $ SU10    : int  2 1 1 1 2 1 1 2 1 2 ...
##  $ D11     : int  3 4 4 3 4 5 5 3 4 4 ...
##  $ ST12    : int  3 1 4 3 2 3 2 4 4 4 ...
##  $ SU13    : int  3 3 2 2 3 1 1 2 1 2 ...
##  $ D14     : int  4 2 4 4 4 5 5 4 4 4 ...
##  $ D15     : int  3 3 2 3 3 4 2 2 3 4 ...
##  $ SU16    : int  2 4 3 2 3 2 3 3 4 4 ...
##  $ ST17    : int  3 4 3 3 4 3 4 3 4 4 ...
##  $ SU18    : int  2 2 1 1 1 2 1 2 1 2 ...
##  $ D19     : int  4 3 4 3 4 4 4 4 5 4 ...
##  $ ST20    : int  2 1 3 3 3 3 1 4 4 2 ...
##  $ SU21    : int  3 2 2 3 2 4 1 3 2 4 ...
##  $ D22     : int  3 2 4 3 3 5 4 2 4 4 ...
##  $ D23     : int  2 3 3 3 3 4 3 2 4 4 ...
##  $ SU24    : int  2 4 3 2 4 2 2 4 2 4 ...
##  $ ST25    : int  4 2 4 3 4 4 1 4 4 4 ...
##  $ SU26    : int  4 4 4 2 3 2 1 4 4 4 ...
##  $ D27     : int  4 2 3 3 3 5 4 4 5 4 ...
##  $ ST28    : int  4 2 5 3 5 4 1 4 5 2 ...
##  $ SU29    : int  3 3 2 3 3 2 1 2 1 2 ...
##  $ D30     : int  4 3 4 4 3 5 4 3 4 4 ...
##  $ D31     : int  4 4 3 4 4 5 4 4 5 4 ...
##  $ SU32    : int  3 5 5 3 4 3 4 4 3 4 ...
##  $ Ca      : int  2 4 3 3 2 3 4 2 3 2 ...
##  $ Cb      : int  4 4 5 4 4 5 5 4 5 4 ...
##  $ Cc      : int  3 4 4 4 4 4 4 4 4 4 ...
##  $ Cd      : int  4 5 4 4 3 4 4 5 5 5 ...
##  $ Ce      : int  3 5 3 3 3 3 4 3 3 4 ...
##  $ Cf      : int  2 3 4 4 3 4 5 3 3 4 ...
##  $ Cg      : int  3 2 4 4 4 5 5 3 5 4 ...
##  $ Ch      : int  4 4 2 3 4 4 3 3 5 4 ...
##  $ Da      : int  3 4 1 2 3 3 2 2 4 1 ...
##  $ Db      : int  4 3 4 4 4 5 4 4 2 4 ...
##  $ Dc      : int  4 3 4 5 4 4 4 4 4 4 ...
##  $ Dd      : int  5 4 1 2 4 4 5 3 5 2 ...
##  $ De      : int  4 3 4 5 4 4 5 4 4 2 ...
##  $ Df      : int  2 2 1 1 2 3 1 1 4 1 ...
##  $ Dg      : int  4 3 3 5 5 4 4 4 5 1 ...
##  $ Dh      : int  3 3 1 4 5 3 4 1 4 1 ...
##  $ Di      : int  4 2 1 2 3 3 2 1 4 1 ...
##  $ Dj      : int  4 4 5 5 3 5 4 5 2 4 ...
##  $ Age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ Attitude: int  37 31 25 35 37 38 35 29 38 21 ...
##  $ Points  : int  25 12 24 10 22 21 21 31 24 26 ...
##  $ gender  : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...

Access the dplyr library:

library(dplyr)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union

Combining the questions related to deep, surface and strategic learning.

deep_questions <- c("D03", "D11", "D19", "D27", "D07", "D14", "D22", "D30")  # duplicated entries removed; select() picks each column only once anyway
surface_questions <-c("SU02","SU10","SU18","SU26", "SU05","SU13","SU21","SU29","SU08","SU16","SU24","SU32")
strategic_questions <- c("ST01","ST09","ST17","ST25","ST04","ST12","ST20","ST28")

Selecting the columns for “deep” and averaging them to keep the combination variable on the original scale.

deep_columns <- select(learning2014, one_of(deep_questions))
learning2014$deep <- rowMeans(deep_columns)

Selecting the columns for “surf” and averaging them to keep the combination variable on the original scale.

surface_columns <- select(learning2014, one_of(surface_questions))
learning2014$surf <- rowMeans(surface_columns)

Selecting the columns for “stra” and averaging them to keep the combination variable on the original scale.

strategic_columns <- select(learning2014, one_of(strategic_questions))
learning2014$stra <- rowMeans(strategic_columns)

Selecting the variables gender, age, attitude towards statistics, exam points and the three new summary variables into a new dataset.

keep_columns <- c("gender", "Age", "Attitude", "deep", "stra", "surf", "Points")
new_learning2014 <- select(learning2014, one_of(keep_columns))

Removing the observations with 0 exam points:

new_learning2014 <- filter(new_learning2014, Points > 0)

Testing: The dimension and structure seem to be correct.

dim(new_learning2014)
## [1] 166   7
str(new_learning2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
##  $ Age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ Attitude: int  37 31 25 35 37 38 35 29 38 21 ...
##  $ deep    : num  3.75 2.88 3.88 3.5 3.75 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ Points  : int  25 12 24 10 22 21 21 31 24 26 ...

Saving the dataset:

write.table(new_learning2014, "learning2014.txt", sep = "\t")

Reading the set back in and then testing that it works properly:
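The read-back chunk is not echoed above; a minimal sketch, assuming the file was saved to the working directory (note that in newer R versions gender may come back as character rather than factor):

learning2014_check <- read.table("learning2014.txt", sep = "\t", header = TRUE)
str(learning2014_check)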

str(new_learning2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
##  $ Age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ Attitude: int  37 31 25 35 37 38 35 29 38 21 ...
##  $ deep    : num  3.75 2.88 3.88 3.5 3.75 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ Points  : int  25 12 24 10 22 21 21 31 24 26 ...
dim(new_learning2014)
## [1] 166   7
head(new_learning2014)
##   gender Age Attitude  deep  stra     surf Points
## 1      F  53       37 3.750 3.375 2.583333     25
## 2      M  55       31 2.875 2.750 3.166667     12
## 3      F  49       25 3.875 3.625 2.250000     24
## 4      M  53       35 3.500 3.125 2.250000     10
## 5      M  49       37 3.750 3.625 2.833333     22
## 6      F  38       38 4.875 3.625 2.416667     21

…and everything seems to be OK, so I am able to move forward to the analysis part of the exercises.

Analysis

  1. Describing the data

I’m using the dataset from the previous part of the exercise. The data is part of a larger “Approaches to Learning” survey. Only a few background variables are used here (age and gender), together with a variable measuring global attitude toward statistics, exam points, and sum variables measuring the students’ deep, surface and strategic learning.

When observations with zero exam points are excluded, the total N of the data is 166.

  2. Descriptive statistics

The first thing to do with the data is to look at some descriptive statistics.

summary(new_learning2014)
##  gender       Age           Attitude          deep            stra      
##  F:110   Min.   :17.00   Min.   :14.00   Min.   :1.625   Min.   :1.250  
##  M: 56   1st Qu.:21.00   1st Qu.:26.00   1st Qu.:3.500   1st Qu.:2.625  
##          Median :22.00   Median :32.00   Median :3.875   Median :3.188  
##          Mean   :25.51   Mean   :31.43   Mean   :3.796   Mean   :3.121  
##          3rd Qu.:27.00   3rd Qu.:37.00   3rd Qu.:4.250   3rd Qu.:3.625  
##          Max.   :55.00   Max.   :50.00   Max.   :4.875   Max.   :5.000  
##       surf           Points     
##  Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.417   1st Qu.:19.00  
##  Median :2.833   Median :23.00  
##  Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :4.333   Max.   :33.00

There are 110 female and 56 male respondents in the data. The average age is 25.5; the youngest respondent is 17 and the oldest 55 years old. The mean attitude towards statistics is 31.4, ranging from 14 to 50. Exam points range (when 0 points are excluded) from 7 to 33, and the average is 22.7. The summary variables measuring deep, strategic and surface learning are scaled from 1 to 5. The average for deep learning is 3.8 (minimum 1.6, maximum 4.9), for strategic learning 3.1 (min 1.3, max 5.0) and for surface learning 2.8 (min 1.6, max 4.3).

As a graphical overview I made scatter plots of the variables against the background variables (age and gender).

plot(new_learning2014$Attitude, new_learning2014$Age, col = new_learning2014$gender, main = "Figure 1: Attitude according to age and gender")

Males and older respondents seem to have a more positive attitude towards statistics.

plot(new_learning2014$Points, new_learning2014$Age, col = new_learning2014$gender, main = "Figure 2: Exam points according to age and gender")

There is a lot of variation in exam points and no clear trends are visible. Especially at the lower points, both genders and respondents of all ages are represented.

plot(new_learning2014$deep, new_learning2014$Age, col = new_learning2014$gender, main = "Figure 3: Deep learning according to age and gender")

In deep learning there is a clear impact of age: older respondents tend to use learning methods that are connected with deep learning.

plot(new_learning2014$stra, new_learning2014$Age, col = new_learning2014$gender, main = "Figure 4: Strategic learning according to age and gender")

In strategic learning the impact of age is not as visible as above, but some connection can be seen.

plot(new_learning2014$surf, new_learning2014$Age, col = new_learning2014$gender, main = "Figure 5: Surface learning according to age and gender")

Surface learning is less common than the other two, but surprisingly there are no clear differences according to age or gender.

  3. Regression model

Creating a regression model with three explanatory variables (attitude, deep learning and strategic learning). The dependent variable is exam points. Checking the summary of the model.

rg_model <- lm(Points ~ Attitude + deep + stra, data = new_learning2014)
summary(rg_model)
## 
## Call:
## lm(formula = Points ~ Attitude + deep + stra, data = new_learning2014)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -17.525  -3.342   0.572   3.902  11.646 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 12.05207    3.43928   3.504 0.000592 ***
## Attitude     0.35116    0.05654   6.211 4.27e-09 ***
## deep        -0.90243    0.72430  -1.246 0.214584    
## stra         0.97852    0.53610   1.825 0.069805 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.28 on 162 degrees of freedom
## Multiple R-squared:  0.2124, Adjusted R-squared:  0.1978 
## F-statistic: 14.56 on 3 and 162 DF,  p-value: 1.925e-08

The summary table tells us that when attitude rises by one point, exam points increase by 0.35. The corresponding coefficient for deep learning is -0.90 and for strategic learning 0.98.

The only statistically significant variable is attitude. I also tried surface learning, age and gender, but they were not statistically significant in the analysis. Therefore I remove all variables except attitude.
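For reference, a sketch of the kind of fuller model that was tried (the exact combinations tested are not shown above):

rg_model_try <- lm(Points ~ Attitude + deep + stra + surf + Age + gender, data = new_learning2014)
summary(rg_model_try)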

  4. Summary of the fitted model

The final regression model is simple, with only one independent, explanatory variable.

rg_model2 <- lm(Points ~ Attitude, data = new_learning2014)
summary(rg_model2)
## 
## Call:
## lm(formula = Points ~ Attitude, data = new_learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.9763  -3.2119   0.4339   4.1534  10.6645 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 11.63715    1.83035   6.358 1.95e-09 ***
## Attitude     0.35255    0.05674   6.214 4.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared:  0.1906, Adjusted R-squared:  0.1856 
## F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09

The effect of attitude stays the same as above: when the level of attitude grows by one point, exam points increase by 0.35 points. The effect is not very strong, but it is statistically significant, which means there is a very low chance of drawing the wrong conclusion about the relationship between these two variables.

  5. Diagnostic plots

Residuals vs. Fitted values:

In this diagnostic plot we examine whether the errors are just random errors that do not depend on the explanatory variable. This means that there should not be any kind of pattern in the plot, and luckily there is none.

plot(rg_model2, which = 1)

Normal QQ-plot:

The plot shows that the residuals are close enough to normally distributed: they follow the line quite well.

plot(rg_model2, which = 2)

Residuals vs. Leverage:

In this plot we examine the impact of individual observations on the model, essentially by checking whether any observations that do not fit the model lie far from the others (high leverage). In this case there is nothing to worry about.

plot(rg_model2, which = 5)
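The three diagnostic plots can also be drawn into a single figure; a small convenience sketch using base graphics:

par(mfrow = c(2, 2))                 # 2x2 plotting grid
plot(rg_model2, which = c(1, 2, 5))  # the three plots discussed above
par(mfrow = c(1, 1))                 # restore the default layout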


Chapter 3: Logistic regression

The data

The data set used in this exercise is a joined data set on student alcohol consumption. The data and some information about it are available here. The data set was joined and modified in the previous data wrangling exercise; the R script is available here. We start by reading the dataset from the file.

alc <- read.csv("/Users/tlaurone/GitHub/tinalauronen/IODS-project/data/alc.csv")

In the joined data set there are 35 variables and N = 382; the extra column X is just a row index left over from saving the file. The variable names are the following:

colnames(alc)
##  [1] "X"          "school"     "sex"        "age"        "address"   
##  [6] "famsize"    "Pstatus"    "Medu"       "Fedu"       "Mjob"      
## [11] "Fjob"       "reason"     "nursery"    "internet"   "guardian"  
## [16] "traveltime" "studytime"  "failures"   "schoolsup"  "famsup"    
## [21] "paid"       "activities" "higher"     "romantic"   "famrel"    
## [26] "freetime"   "goout"      "Dalc"       "Walc"       "health"    
## [31] "absences"   "G1"         "G2"         "G3"         "alc_use"   
## [36] "high_use"

We also need some libraries to conduct the analyses.

library(tidyr); library(dplyr); library(ggplot2); library(gmodels)

The variables and hypotheses

The dependent variable is high alcohol consumption. It is a dichotomous variable (TRUE/FALSE) computed from the variables measuring workday and weekend alcohol consumption (on a scale from 1, very low, to 5, very high). If the average of those alcohol consumption variables is more than 2, the respondent is classified in the high consumption category; a sketch of this computation is shown below. Of the 382 respondents, 114 are in the high usage category and 268 are not.
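The high_use variable itself was created in the data wrangling step; a sketch of how it is typically computed from the workday (Dalc) and weekend (Walc) consumption variables:

alc <- mutate(alc, alc_use = (Dalc + Walc) / 2)  # average of workday and weekend use
alc <- mutate(alc, high_use = alc_use > 2)       # TRUE when the average exceeds 2
table(alc$high_use)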

The independent (explanatory) variables chosen for this analysis are sex, mother’s education, going out with friends and romantic relationship.

Distributions and preliminary analysis

There are 198 females and 184 males in the data. The assumption is that males consume more alcohol. Mother’s education is measured on a five-step scale (0 = no education, 1 = primary education, 2 = 5th to 9th grade, 3 = secondary education, 4 = higher education). The assumption is that the more educated the mother is, the less the student uses alcohol.

Going out with friends is measured on a scale from 1 (very low) to 5 (very high). The assumption is that the more a student goes out with friends, the more they use alcohol. Romantic relationship is measured as a simple dichotomy, yes or no. The assumption is that a romantic relationship decreases alcohol consumption.

The connection between sex and high alcohol consumption is easiest to explore by cross tabulation. Below we can see that 39.1 % of male respondents are considered high users, compared to only 21.2 % of females. Of all high users, 63.2 % are male and 36.8 % female. The result is statistically significant (p < 0.001).

CrossTable(alc$high_use, alc$sex, prop.r = TRUE, prop.c = TRUE, prop.t = FALSE, prop.chisq = FALSE, chisq = TRUE)
## 
##  
##    Cell Contents
## |-------------------------|
## |                       N |
## |           N / Row Total |
## |           N / Col Total |
## |-------------------------|
## 
##  
## Total Observations in Table:  382 
## 
##  
##              | alc$sex 
## alc$high_use |         F |         M | Row Total | 
## -------------|-----------|-----------|-----------|
##        FALSE |       156 |       112 |       268 | 
##              |     0.582 |     0.418 |     0.702 | 
##              |     0.788 |     0.609 |           | 
## -------------|-----------|-----------|-----------|
##         TRUE |        42 |        72 |       114 | 
##              |     0.368 |     0.632 |     0.298 | 
##              |     0.212 |     0.391 |           | 
## -------------|-----------|-----------|-----------|
## Column Total |       198 |       184 |       382 | 
##              |     0.518 |     0.482 |           | 
## -------------|-----------|-----------|-----------|
## 
##  
## Statistics for All Table Factors
## 
## 
## Pearson's Chi-squared test 
## ------------------------------------------------------------
## Chi^2 =  14.62517     d.f. =  1     p =  0.000131151 
## 
## Pearson's Chi-squared test with Yates' continuity correction 
## ------------------------------------------------------------
## Chi^2 =  13.78187     d.f. =  1     p =  0.0002053081 
## 
## 

To present the distribution of the other independent variables and their connections to the dependent variable, we use simple bar plots.

Figure 1 presents the distribution of the mother’s education variable. By adding the high alcohol use variable to the figure, it is possible to see some connection between the variables: compared to primary education, children of somewhat more educated mothers are less likely to be high users of alcohol. Secondary education increases the share of high users, but higher education again seems to be connected with less usage.

qplot(Medu, data = alc, color = high_use, main = "Figure 1: Mother's education")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

Figure 2 presents the frequencies of the going out with friends variable and its connection to high alcohol usage. As assumed, the more students go out with friends, the more likely they are to use a lot of alcohol.

qplot(goout, data = alc, color = high_use, main = "Figure 2: Going out with friends")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

In Figure 3 there is no clear, visible difference in high alcohol consumption between those in and not in a romantic relationship, partly because the number of students not in a relationship is so much higher.

qplot(romantic, data = alc, color = high_use, main = "Figure 3: Having romantic relationship")
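In the qplot calls above, color only outlines the bars; mapping high_use to fill makes the shares easier to read. A sketch of the alternative:

ggplot(alc, aes(x = goout, fill = high_use)) + geom_bar() + ggtitle("Going out with friends by high alcohol use")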

Simple cross tabulations or bar plots are not sufficient methods for approaching a question as complicated as high alcohol consumption. It is necessary to move on to more sophisticated methods.

Logistic regression analysis

Using the variables presented above we conduct a logistic regression analysis to explore high alcohol usage. First we make the model and print the summary of it, then we compute and print odds ratios (OR) and confidence intervals (CI).

r_model <- glm(high_use ~ sex + Medu + goout + romantic, data = alc, family = "binomial")
summary.glm(r_model)
## 
## Call:
## glm(formula = high_use ~ sex + Medu + goout + romantic, family = "binomial", 
##     data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.6002  -0.8624  -0.6150   0.8460   2.5276  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -3.63037    0.54119  -6.708 1.97e-11 ***
## sexM         0.87723    0.24830   3.533 0.000411 ***
## Medu        -0.05562    0.11326  -0.491 0.623384    
## goout        0.76382    0.11819   6.463 1.03e-10 ***
## romanticyes -0.11913    0.26591  -0.448 0.654154    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 402.30  on 377  degrees of freedom
## AIC: 412.3
## 
## Number of Fisher Scoring iterations: 4
OR <- coef(r_model) %>% exp
CI <- confint(r_model) %>% exp
## Waiting for profiling to be done...
cbind(OR, CI)
##                     OR       2.5 %     97.5 %
## (Intercept) 0.02650637 0.008790626 0.07374126
## sexM        2.40422513 1.484855219 3.93774659
## Medu        0.94590023 0.757602427 1.18226671
## goout       2.14645372 1.712829640 2.72499480
## romanticyes 0.88769573 0.523308804 1.48804026

From the coefficient table of the summary we can see that only sex and going out with friends are statistically significant variables explaining high alcohol usage. From the OR table we can interpret that if a student is male, his odds of being a high user are 2.4 times those of a female student. Going out with friends also increases the probability of being a high user, but due to the measurement scale of the variable the interpretation is not as straightforward.

Setting the statistical significance levels aside, the interpretation of the ORs can be simplified: if the OR is less than 1, being a high user is less likely. In our case this would mean that a more educated mother and being in a romantic relationship decrease the probability of high alcohol use. Instead, being male and going out with friends a lot increase that probability, and those effects are stronger as well. These results are in line with the prior assumptions, although for mother’s education and romantic relationship there is no statistical ground for such a claim.

Predictions

First we create a new regression model with the statistically significant variables from the previous model, sex and going out with friends. Then we create two new variables: one containing the predicted probability, and one predicting high alcohol usage (probability greater than 0.5).

r_m <- glm(high_use ~ sex + goout, data = alc, family = "binomial")
probabilities <- predict(r_m, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = (probability > 0.5))

Then we make a table of the original variable, high alcohol usage, and the variable predicting it.

table(high_use = alc$high_use, prediction = alc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   251   17
##    TRUE     63   51

…and here is the same comparison in a graphical form:

g <- ggplot(alc, aes(x = probability, y = high_use, col = prediction))
g + geom_point()

Here we compute the proportion of inaccurately classified individuals. First we create a function to calculate the training error, then we do the actual math.

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2094241

When testing the function by giving the parameter prob the value 0 or 1 (meaning that none of our respondents are high users, or that all of them are), the result with the actual probability variable is better (the value of loss_func is smaller). Plain guessing would have gone wrong more often.
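A sketch of that sanity check; with prob = 0 the error equals the share of high users (114/382 ≈ 0.30), and with prob = 1 the share of non-high users (268/382 ≈ 0.70), both clearly worse than 0.21:

loss_func(class = alc$high_use, prob = 0)  # everyone classified as not a high user
loss_func(class = alc$high_use, prob = 1)  # everyone classified as a high user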

K-fold cross-validation

Finally we test our model with K-fold cross-validation.

library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = r_m, K = 10)
cv$delta[1]
## [1] 0.2303665

The cross-validation exercise in DataCamp resulted in an error value of about 0.26. Here the error value is approximately 0.23, which means that our model performs slightly better.


Chapter 4: Clustering and classification

Data

In this exercise we use the “Boston” dataset from the MASS package for R. First we take a look at the structure and dimensions of the data. The data has 14 variables and 506 observations. Detailed information about the variables is found here. The observations are towns in the Boston area.
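The code chunks in this chapter are not echoed; a sketch of the loading step that would produce the output below, assuming the MASS and tidyverse packages are installed:

library(MASS)
library(tidyverse)
data("Boston")
str(Boston)
dim(Boston)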

## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
## Loading tidyverse: tibble
## Loading tidyverse: readr
## Loading tidyverse: purrr
## Conflicts with tidy packages ----------------------------------------------
## filter(): dplyr, stats
## lag():    dplyr, stats
## select(): dplyr, MASS
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
## [1] 506  14

Graphical overview and summaries

The graphical overview of all the variables does not tell much.

Instead we take a look at the summaries of the variables:
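A sketch of the overview and summary calls (pairs() draws the plot matrix, summary() prints the table below):

pairs(Boston)
summary(Boston)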

##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08204   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

Then we check the correlations between the variables. There are strong negative correlations between the weighted mean of distances to five Boston employment centres (dis) and each of the proportion of non-retail business acres per town (indus), the nitrogen oxides concentration in parts per 10 million (nox) and the proportion of owner-occupied units built prior to 1940 (age). A strong negative correlation is also found between the lower status of the population in per cent (lstat) and the median value of owner-occupied homes in $1000s (medv). A notably strong positive correlation is found between the index of accessibility to radial highways (rad) and the full-value property-tax rate per $10,000 (tax).
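A sketch of the correlation check, assuming the corrplot package:

library(corrplot)
cor_matrix <- cor(Boston) %>% round(digits = 2)  # rounded correlation matrix
corrplot(cor_matrix, method = "circle", type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)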

Modifying the dataset

First we standardize the dataset and print out the summaries of the scaled data. From the summary we can see that the means of the scaled variables are zero, which means that the variables are centered.
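A sketch of the standardization; scale() subtracts the column mean and divides by the standard deviation, and the result is converted back to a data frame:

boston_scaled <- as.data.frame(scale(Boston))
summary(boston_scaled)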

##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865

From the scaled per capita crime rate variable we create a categorical crime rate variable, using the quantiles as the basis of the categorization. As a result we get a variable that divides the towns of Boston into low, medium-low, medium-high and high crime rate categories.
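A sketch of the categorization, using quantile() for the breaks and cut() for the categories:

bins <- quantile(boston_scaled$crim)
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE,
             labels = c("low", "med_low", "med_high", "high"))
boston_scaled <- dplyr::select(boston_scaled, -crim)  # MASS masks select(), hence the prefix
boston_scaled <- data.frame(boston_scaled, crime)
table(boston_scaled$crime)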

## crime
##      low  med_low med_high     high 
##      127      126      126      127

Finally we divide the dataset into a train set (80 per cent) and a test set (20 per cent).
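A sketch of the split, sampling 80 per cent of the row indices for the train set:

n <- nrow(boston_scaled)
ind <- sample(n, size = floor(n * 0.8))  # random 80 % of the rows
train <- boston_scaled[ind, ]
test <- boston_scaled[-ind, ]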

Linear Discriminant Analysis (LDA)

Next we conduct LDA on the train set, using the categorical crime rate variable as the target. As a result we draw an LDA (bi)plot.
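A sketch of the fit and the biplot; the lda.arrows() helper printed at the end of the output below adds the variable arrows:

lda.fit <- lda(crime ~ ., data = train)
classes <- as.numeric(train$crime)
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 2)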

## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2500000 0.2376238 0.2524752 0.2599010 
## 
## Group means:
##                  zn      indus        chas        nox          rm
## low       0.8890734 -0.9019751 -0.11640431 -0.8485819  0.49436000
## med_low  -0.1089388 -0.3553339  0.05576262 -0.5928329 -0.12540763
## med_high -0.3939194  0.2094853  0.19085920  0.3787043  0.07552006
## high     -0.4872402  1.0170492 -0.04735191  1.0600685 -0.40918594
##                 age        dis        rad        tax     ptratio
## low      -0.8691996  0.8161715 -0.6998714 -0.7442585 -0.41072487
## med_low  -0.3743086  0.3381099 -0.5523924 -0.4877614 -0.09878629
## med_high  0.4282763 -0.3891904 -0.4177712 -0.3039529 -0.20362070
## high      0.8215287 -0.8583643  1.6388211  1.5145512  0.78158339
##               black       lstat        medv
## low       0.3789216 -0.78701086  0.57711511
## med_low   0.3525325 -0.15404466  0.02871215
## med_high  0.1123583  0.03276762  0.12403068
## high     -0.8229299  0.88127269 -0.68173081
## 
## Coefficients of linear discriminants:
##                  LD1         LD2         LD3
## zn       0.089817591  0.59533732 -0.95914789
## indus    0.012756117 -0.38341216  0.09901529
## chas    -0.092534062 -0.04440463  0.16581539
## nox      0.418771864 -0.68463165 -1.31335142
## rm      -0.099805509 -0.08434489 -0.19921534
## age      0.250766268 -0.39682760 -0.13046924
## dis     -0.062207153 -0.23696215  0.11078385
## rad      3.154057574  0.88330560 -0.13127986
## tax     -0.001347437  0.15442029  0.76048458
## ptratio  0.141993361 -0.03173391 -0.38682081
## black   -0.157804451  0.01308245  0.17129648
## lstat    0.204062083 -0.19696556  0.33832365
## medv     0.180249256 -0.31513496 -0.23749326
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9526 0.0349 0.0125
## function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choises = c(1,2)){
##     heads <- coef(x)
##     arrows(x0 = 0, y0 = 0,
##            x1 = myscale * heads[,choises[1]],
##            y1 = myscale * heads[,choises[2]], col = color, length = arrow_heads)
##     text(myscale * heads[,choises], labels = row.names(heads),
##          cex = tex, col = color, pos = 3)
## }

Predicting the classes with the LDA model

The categorical crime rate variable is removed from the test set. Now we predict the classes in the test data with the LDA model. From the cross tabulation of the correct and predicted observations we can see that the prediction is mostly usable. The high rates are all predicted correctly, and almost all of the medium-high observations as well. The medium-low predictions are the most problematic.
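A sketch of the prediction step: the correct classes are saved, the crime variable is dropped from the test set, and predict() is applied:

correct_classes <- test$crime
test <- dplyr::select(test, -crime)
lda.pred <- predict(lda.fit, newdata = test)
table(correct = correct_classes, predicted = lda.pred$class)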

##           predicted
## correct    low med_low med_high high
##   low       16      10        0    0
##   med_low    5      16        9    0
##   med_high   1       5       17    1
##   high       0       0        0   22

K-means

Finally we reload the Boston dataset and standardize it. Then we calculate the distances between the observations. A summary of the Euclidean distances is presented below.
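A sketch of the reload and the distance computation (dist() uses the Euclidean distance by default):

data("Boston")
boston_scaled2 <- as.data.frame(scale(Boston))
dist_eu <- dist(boston_scaled2)
summary(dist_eu)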

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4620  4.8240  4.9110  6.1860 14.4000

Then we run the k-means algorithm, use the total within-cluster sum of squares (WCSS) to find the optimal number of clusters, and run the algorithm again. First we try k-means with four clusters; from the WCSS plot we find that the optimal number of clusters is two.
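A sketch of the k-means runs and the WCSS search; set.seed() makes the random initialization reproducible, and the elbow of the curve suggests the number of clusters:

set.seed(123)
km <- kmeans(boston_scaled2, centers = 4)  # the first try with four clusters
twcss <- sapply(1:10, function(k) kmeans(boston_scaled2, centers = k)$tot.withinss)
qplot(x = 1:10, y = twcss, geom = "line")  # total WCSS drops sharply up to k = 2
km <- kmeans(boston_scaled2, centers = 2)  # rerun with the chosen k
pairs(boston_scaled2, col = km$cluster)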

From the latter figure it would be possible to investigate how the observations divide into the two clusters according to every variable in the analysis. Personally I do not find these giant figure matrices very informative, because their readability is very low. They might, however, help when making decisions about further analysis: if the colours in a panel are very mixed, the variable does not matter much for the clusters, and if the colours are well separated, the variable contributes to the formation of the clusters.


Chapter 5: Dimensionality reduction techniques

The data

The data used in this analysis is a combination of two data sets collected by the UNDP. More information about the data can be found on this web page. The observations are countries (155 in total), and the variables used here are Life expectancy at birth, Expected years of schooling, Gross national income (GNI) per capita, Maternal mortality ratio, Adolescent birth rate, Share of seats in parliament (female), Population with at least some secondary education (female/male ratio) and Labour force participation rate (female/male ratio).

Some libraries are needed to conduct the analysis.

First we read the data and explore it. As expected, there are 155 observations (countries) and 8 variables.
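A sketch of the read step; the file name and path are hypothetical stand-ins for the output of the wrangling exercise:

human <- read.csv("data/human.csv", row.names = 1)  # countries as row names; hypothetical path
dim(human)
str(human)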

## [1] 155   8
## 'data.frame':    155 obs. of  8 variables:
##  $ Educ_sec_FM_ratio: num  1.007 0.997 0.983 0.989 0.969 ...
##  $ Labour_FM_ratio  : num  0.891 0.819 0.825 0.884 0.829 ...
##  $ Educ_e           : num  17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
##  $ Life_e           : num  81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
##  $ GNI              : int  64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
##  $ MM_ratio         : int  4 6 6 5 6 7 9 28 11 8 ...
##  $ Ad_birth_rate    : num  7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
##  $ Rep_parl         : num  39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...

Overview

Summary of the data and explanations for the variable names:

##  Educ_sec_FM_ratio Labour_FM_ratio      Educ_e          Life_e     
##  Min.   :0.1717    Min.   :0.1857   Min.   : 5.40   Min.   :49.00  
##  1st Qu.:0.7264    1st Qu.:0.5984   1st Qu.:11.25   1st Qu.:66.30  
##  Median :0.9375    Median :0.7535   Median :13.50   Median :74.20  
##  Mean   :0.8529    Mean   :0.7074   Mean   :13.18   Mean   :71.65  
##  3rd Qu.:0.9968    3rd Qu.:0.8535   3rd Qu.:15.20   3rd Qu.:77.25  
##  Max.   :1.4967    Max.   :1.0380   Max.   :20.20   Max.   :83.50  
##       GNI            MM_ratio      Ad_birth_rate       Rep_parl    
##  Min.   :   581   Min.   :   1.0   Min.   :  0.60   Min.   : 0.00  
##  1st Qu.:  4198   1st Qu.:  11.5   1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 12040   Median :  49.0   Median : 33.60   Median :19.30  
##  Mean   : 17628   Mean   : 149.1   Mean   : 47.16   Mean   :20.91  
##  3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :123124   Max.   :1100.0   Max.   :204.80   Max.   :57.50
Label               Variable
Educ_sec_FM_ratio   Population with at least some secondary education (female/male ratio)
Labour_FM_ratio     Labour force participation rate (female/male ratio)
Educ_e              Expected years of schooling
Life_e              Life expectancy at birth
GNI                 Gross national income (GNI) per capita
MM_ratio            Maternal mortality ratio
Ad_birth_rate       Adolescent birth rate
Rep_parl            Share of seats in parliament (female)

As expected, the observations (countries) differ from each other radically. It is more interesting to investigate the correlations between the variables. Strong positive correlations are found between Expected years of schooling and Life expectancy at birth, and between Maternal mortality ratio and Adolescent birth rate. An obvious strong negative correlation is found between Life expectancy at birth and Maternal mortality ratio. All in all the pattern looks quite clear: the more educated the women (relative to the men), the higher the expected years of schooling, life expectancy and GNI, and the lower the maternal mortality and adolescent birth rate.
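A sketch of the overview, assuming the GGally and corrplot packages:

library(GGally)
library(corrplot)
ggpairs(human)                            # pairwise scatter plots and distributions
cor(human) %>% corrplot(type = "upper")   # visualized correlation matrix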

Principal component analysis (PCA)

Then we conduct a principal component analysis.
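A sketch of the PCA on the unstandardized data; prcomp() is based on the singular value decomposition, and biplot() draws both the observations and the variable arrows:

pca_human <- prcomp(human)
summary(pca_human)
biplot(pca_human, choices = 1:2, cex = c(0.8, 1), col = c("grey40", "deeppink2"))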

## Importance of components:
##                              PC1      PC2   PC3   PC4   PC5   PC6    PC7
## Standard deviation     1.854e+04 185.5219 25.19 11.45 3.766 1.566 0.1912
## Proportion of Variance 9.999e-01   0.0001  0.00  0.00 0.000 0.000 0.0000
## Cumulative Proportion  9.999e-01   1.0000  1.00  1.00 1.000 1.000 1.0000
##                           PC8
## Standard deviation     0.1591
## Proportion of Variance 0.0000
## Cumulative Proportion  1.0000
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

The results are almost unreadable, presumably because the data is unstandardized. We must scale the data and conduct the analysis again.

Principal component analysis (PCA) with standardized data

First we scale the data and check the summary of it.
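A sketch of the standardization:

human_std <- scale(human)
summary(human_std)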

##  Educ_sec_FM_ratio Labour_FM_ratio       Educ_e            Life_e       
##  Min.   :-2.8189   Min.   :-2.6247   Min.   :-2.7378   Min.   :-2.7188  
##  1st Qu.:-0.5233   1st Qu.:-0.5484   1st Qu.:-0.6782   1st Qu.:-0.6425  
##  Median : 0.3503   Median : 0.2316   Median : 0.1140   Median : 0.3056  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5958   3rd Qu.: 0.7350   3rd Qu.: 0.7126   3rd Qu.: 0.6717  
##  Max.   : 2.6646   Max.   : 1.6632   Max.   : 2.4730   Max.   : 1.4218  
##       GNI             MM_ratio       Ad_birth_rate        Rep_parl      
##  Min.   :-0.9193   Min.   :-0.6992   Min.   :-1.1325   Min.   :-1.8203  
##  1st Qu.:-0.7243   1st Qu.:-0.6496   1st Qu.:-0.8394   1st Qu.:-0.7409  
##  Median :-0.3013   Median :-0.4726   Median :-0.3298   Median :-0.1403  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.3712   3rd Qu.: 0.1932   3rd Qu.: 0.6030   3rd Qu.: 0.6127  
##  Max.   : 5.6890   Max.   : 4.4899   Max.   : 3.8344   Max.   : 3.1850

The means are 0.0, so we can move on to the PCA.
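A sketch of the rerun on the standardized data:

pca_human_std <- prcomp(human_std)
summary(pca_human_std)
biplot(pca_human_std, choices = 1:2, cex = c(0.8, 1), col = c("grey40", "deeppink2"))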

## Importance of components:
##                           PC1    PC2     PC3     PC4     PC5     PC6
## Standard deviation     2.0708 1.1397 0.87505 0.77886 0.66196 0.53631
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595
## Cumulative Proportion  0.5361 0.6984 0.79413 0.86996 0.92473 0.96069
##                            PC7     PC8
## Standard deviation     0.45900 0.32224
## Proportion of Variance 0.02634 0.01298
## Cumulative Proportion  0.98702 1.00000

Now the results make sense and are much easier to interpret. From the summary we can see that the first principal component (PC1) covers 53.6 per cent of the variation and PC2 covers 16.2 per cent. With the unstandardized data, some variables dominated the analysis because their scales differed from the others, but now we are able to see how the countries are located according to the principal components.

From the arrows of the biplot we can interpret the content of the principal components. PC2 consists of the variables measuring Labour force participation rate (female/male ratio) and Share of seats in parliament (female); it can be interpreted as a kind of gender equality component. The more important PC1 consists, on the one hand, of Maternal mortality ratio and Adolescent birth rate, and on the other hand of Expected years of schooling, Life expectancy at birth, Gross national income (GNI) per capita and Population with at least some secondary education (female/male ratio). These variables relate to health issues, especially women’s health, and to the variables correlated with them.

According to this analysis the old thesis about girls’ education as the best investment for overall well-being gets support.

Multiple correspondence analysis (MCA) with tea data

For the MCA we load the “tea” data set from the R package “FactoMineR”. First we explore the data set. There are many variables we are not interested in, so we reduce their number.
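A sketch of the loading and pruning; keep_columns matches the six variables shown in the str() output at the end of the block below:

library(FactoMineR)
data("tea")
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
tea_time <- dplyr::select(tea, one_of(keep_columns))
summary(tea_time)
str(tea_time)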

## [1] 300  36
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
##  $ frequency       : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
##          breakfast           tea.time          evening          lunch    
##  breakfast    :144   Not.tea time:131   evening    :103   lunch    : 44  
##  Not.breakfast:156   tea time    :169   Not.evening:197   Not.lunch:256  
##                                                                          
##                                                                          
##                                                                          
##                                                                          
##                                                                          
##         dinner           always          home           work    
##  dinner    : 21   always    :103   home    :291   Not.work:213  
##  Not.dinner:279   Not.always:197   Not.home:  9   work    : 87  
##                                                                 
##                                                                 
##                                                                 
##                                                                 
##                                                                 
##         tearoom           friends          resto          pub     
##  Not.tearoom:242   friends    :196   Not.resto:221   Not.pub:237  
##  tearoom    : 58   Not.friends:104   resto    : 79   pub    : 63  
##                                                                   
##                                                                   
##                                                                   
##                                                                   
##                                                                   
##         Tea         How           sugar                     how     
##  black    : 74   alone:195   No.sugar:155   tea bag           :170  
##  Earl Grey:193   lemon: 33   sugar   :145   tea bag+unpackaged: 94  
##  green    : 33   milk : 63                  unpackaged        : 36  
##                  other:  9                                          
##                                                                     
##                                                                     
##                                                                     
##                   where                 price          age        sex    
##  chain store         :192   p_branded      : 95   Min.   :15.00   F:178  
##  chain store+tea shop: 78   p_cheap        :  7   1st Qu.:23.00   M:122  
##  tea shop            : 30   p_private label: 21   Median :32.00          
##                             p_unknown      : 12   Mean   :37.05          
##                             p_upscale      : 53   3rd Qu.:48.00          
##                             p_variable     :112   Max.   :90.00          
##                                                                          
##            SPC               Sport       age_Q          frequency  
##  employee    :59   Not.sportsman:121   15-24:92   1/day      : 95  
##  middle      :40   sportsman    :179   25-34:69   1 to 2/week: 44  
##  non-worker  :64                       35-44:40   +2/day     :127  
##  other worker:20                       45-59:61   3 to 6/week: 34  
##  senior      :35                       +60  :38                    
##  student     :70                                                   
##  workman     :12                                                   
##              escape.exoticism           spirituality        healthy   
##  escape-exoticism    :142     Not.spirituality:206   healthy    :210  
##  Not.escape-exoticism:158     spirituality    : 94   Not.healthy: 90  
##                                                                       
##                                                                       
##                                                                       
##                                                                       
##                                                                       
##          diuretic             friendliness            iron.absorption
##  diuretic    :174   friendliness    :242   iron absorption    : 31   
##  Not.diuretic:126   Not.friendliness: 58   Not.iron absorption:269   
##                                                                      
##                                                                      
##                                                                      
##                                                                      
##                                                                      
##          feminine             sophisticated        slimming  
##  feminine    :129   Not.sophisticated: 85   No.slimming:255  
##  Not.feminine:171   sophisticated    :215   slimming   : 45  
##                                                              
##                                                              
##                                                              
##                                                              
##                                                              
##         exciting          relaxing              effect.on.health
##  exciting   :116   No.relaxing:113   effect on health   : 66    
##  No.exciting:184   relaxing   :187   No.effect on health:234    
##                                                                 
##                                                                 
##                                                                 
##                                                                 
## 
##         Tea         How                      how           sugar    
##  black    : 74   alone:195   tea bag           :170   No.sugar:155  
##  Earl Grey:193   lemon: 33   tea bag+unpackaged: 94   sugar   :145  
##  green    : 33   milk : 63   unpackaged        : 36                 
##                  other:  9                                          
##                   where           lunch    
##  chain store         :192   lunch    : 44  
##  chain store+tea shop: 78   Not.lunch:256  
##  tea shop            : 30                  
## 
## 'data.frame':    300 obs. of  6 variables:
##  $ Tea  : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How  : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ how  : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
## Warning: attributes are not identical across measure variables; they will
## be dropped

Then we conduct a multiple correspondence analysis (MCA), print its summary and visualize it.
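A sketch of the MCA call and its visualization (graph = FALSE suppresses the default plots; habillage = "quali" colours the categories by variable):

mca <- MCA(tea_time, graph = FALSE)
summary(mca)
plot(mca, invisible = c("ind"), habillage = "quali")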

## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6
## Variance               0.279   0.261   0.219   0.189   0.177   0.156
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953
##                        Dim.7   Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.144   0.141   0.117   0.087   0.062
## % of var.              7.841   7.705   6.392   4.724   3.385
## Cumulative % of var.  77.794  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898
##                       cos2  v.test     Dim.3     ctr    cos2  v.test  
## black                0.003   0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            0.027   2.867 |   0.433   9.160   0.338  10.053 |
## green                0.107  -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone                0.127  -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                0.035   3.226 |   1.329  14.771   0.218   8.081 |
## milk                 0.020   2.422 |   0.013   0.003   0.000   0.116 |
## other                0.102   5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag              0.161  -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged   0.478  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged           0.141  -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |

The MCA biplot can be interpreted as a taste map of tea consumption. The first dimension tells us about the experimentality or safety of the respondents’ tea drinking habits: on one side of the scale there are people who drink Earl Grey bought from a chain store in tea bags (and as this is very common, it lies close to zero), and on the other side there are specialized tea shops and unpackaged green or black tea.

The second dimension is a bit harder to interpret, but on the upper side of the figure there are some traces of omnivorousness. Everything goes: chain stores and tea shops, unpackaged tea and tea bags, and even “other” additions than milk or lemon. On the lower part of the figure the choices can be interpreted as more traditional.

As an overall interpretation it is easiest to say that the closer to the zero point of the two dimensions a variable is, the more common or “mainstream” it is.